Sentiment Analysis | 褚则伟 (Zewei Chu) zeweichu@gmail.com
Learning objectives
Learn to build and train text classification models
Learn the basics of torchtext
Learn some basic models in torch.nn
This notebook is based on https://github.com/bentrevett/pytorch-sentiment-analysis
In this notebook we use PyTorch and TorchText to do sentiment analysis, i.e. detecting whether a piece of text expresses positive or negative sentiment. We use the IMDb dataset of movie reviews.
Going from simple to complex, we will build the following models in turn:
Word Averaging model
RNN/LSTM model
CNN model
Preparing the data
An important concept in TorchText is the Field. A Field defines how your data is processed. In our sentiment classification task, the data we work with consists of the raw text string and one of two sentiment labels, "pos" or "neg".
The arguments of a Field specify how the data will be processed.
We use the TEXT field to define how the movie reviews are processed, and the LABEL field to handle the two sentiment classes.
Our TEXT field takes tokenize='spacy', which means the spaCy tokenizer is used to tokenize English sentences. If we do not set the tokenize argument, the default is to split on whitespace.
Install spaCy:

pip install -U spacy
python -m spacy download en
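To see how the spaCy tokenizer differs from whitespace splitting, here is a quick check (a sketch, assuming the English model downloaded above is available; the printed token lists are indicative):

import spacy
nlp = spacy.load('en')

sentence = "Don't watch this movie!"
print([tok.text for tok in nlp.tokenizer(sentence)])  # spaCy splits off "n't" and the trailing "!"
print(sentence.split())                               # whitespace split keeps "Don't" and "movie!" intact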
LABEL is defined by LabelField, a special kind of Field used for labels. We will explain its dtype later.
For more about Fields, see https://github.com/pytorch/text/blob/master/torchtext/data/field.py
As usual, we set random seeds so that the experiments are reproducible.
import torch
import torchtext
from torchtext import data

SEED = 1234

torch.manual_seed(SEED)
torch.cuda.manual_seed(SEED)
torch.backends.cudnn.deterministic = True

TEXT = data.Field(tokenize='spacy')
LABEL = data.LabelField()

print(torch.__version__)
print(torchtext.__version__)
1.1.0
0.3.1
TorchText supports many common natural language processing datasets.
The code below automatically downloads the IMDb dataset and splits it into train/test torchtext.datasets objects. The data is processed by the Fields defined above. The IMDb dataset contains 50,000 movie reviews, each labeled as positive or negative.

from torchtext import datasets
train_data, test_data = datasets.IMDB.splits(TEXT, LABEL)

Check how many examples each split contains.

print(f'Number of training examples: {len(train_data)}')
print(f'Number of testing examples: {len(test_data)}')
Number of training examples: 25000
Number of testing examples: 25000
Let's look at one example.

print(vars(train_data.examples[0]))
{'text': ['Brilliant', 'adaptation', 'of', 'the', 'novel', 'that', 'made', 'famous', 'the', 'relatives', 'of', 'Chilean', 'President', 'Salvador', 'Allende', 'killed', '.', 'In', 'the', 'environment', 'of', 'a', 'large', 'estate', 'that', 'arises', 'from', 'the', 'ruins', ',', 'becoming', 'a', 'force', 'to', 'abuse', 'and', 'exploitation', 'of', 'outrage', ',', 'a', 'luxury', 'estate', 'for', 'the', 'benefit', 'of', 'the', 'upstart', 'Esteban', 'Trueba', 'and', 'his', 'undeserved', 'family', ',', 'the', 'brilliant', 'Danish', 'director', 'Bille', 'August', 'recreates', ',', 'in', 'micro', ',', 'which', 'at', 'the', 'time', 'would', 'be', 'the', 'process', 'leading', 'to', 'the', 'greatest', 'infamy', 'of', 'his', 'story', 'to', 'the', 'hardened', 'Chilean', 'nation', ',', 'and', 'whose', 'main', 'character', 'would', 'Augusto', 'Pinochet', '(', 'Stephen', 'similarities', 'with', 'it', 'are', 'inevitable', ':', 'recall', ',', 'as', 'an', 'example', ',', 'that', 'image', 'of', 'the', 'senator', 'with', 'dark', 'glasses', 'that', 'makes', 'him', 'the', 'wink', 'to', 'the', 'general', 'to', 'begin', 'making', 'the', 'palace).<br', '/><br', '/>Bille', 'August', 'attends', 'an', 'exceptional', 'cast', 'in', 'the', 'Jeremy', 'protruding', 'Irons', ',', 'whose', 'character', 'changes', 'from', 'arrogance', 'and', 'extreme', 'cruelty', ',', 'the', 'hard', 'lesson', 'that', 'life', 'always', 'brings', 'us', 'to', 'almost', 'force', 'us', 'to', 'change', '.', 'In', 'Esteban', 'fully', 'applies', 'the', 'law', 'of', 'resonance', ',', 'with', 'great', 'wisdom', ',', 'Solomon', 'describes', 'in', 'these', 'words:"The', 'things', 'that', 'freckles', 'are', 'the', 'same', 'punishment', 'that', 'will', 'serve', 'you', '.', '"', '<', 'br', '/><br', '/>Unforgettable', 'Glenn', 'Close', 'playing', 'splint', ',', 'the', 'tainted', 'sister', 'of', 'Stephen', ',', 'whose', 'sin', ',', 'driven', 'by', 'loneliness', ',', 'spiritual', 'and', 'platonic', 'love', 'was', 'the', 'wife', 'of', 'his', 'cruel', 'snowy', 'brother', '.', 'Meryl', 'Streep', 'also', 'brilliant', ',', 'a', 'woman', 'whose', 'name', 'came', 'to', 'him', 'like', 'a', 'glove', 'Clara', '.', 'With', 'telekinetic', 'powers', ',', 'cognitive', 'and', 'mediumistic', ',', 'this', 'hardened', 'woman', ',', 'loyal', 'to', 'his', 'blunt', ',', 'conservative', 'husband', ',', 'is', 'an', 'indicator', 'of', 'character', 'and', 'self', '-', 'control', 'that', 'we', 'wish', 'for', 'ourselves', 'and', 'for', 'all', 'human', 'beings', '.', '<', 'br', '/><br', '/>Every', 'character', 'is', 'a', 'portrait', 'of', 'virtuosity', '(', 'as', 'Blanca', 'worthy', 'rebel', 'leader', 'Pedro', 'Segundo', 'unhappy', '...', ')', 'or', 'a', 'portrait', 'of', 'humiliation', ',', 'like', 'Stephen', 'Jr.', ',', 'the', 'bastard', 'child', 'of', 'Senator', ',', 'who', 'serves', 'as', 'an', 'instrument', 'for', 'the', 'return', 'of', 'the', 'boomerang', '.', '<', 'br', '/><br', '/>The', 'film', 'moves', 'the', 'bowels', ',', 'we', 'recreated', 'some', 'facts', 'that', 'should', 'not', 'ever', 'be', 'repeated', ',', 'but', 'that', 'absurdly', 'still', 'happen', '(', 'Colombia', 'is', 'a', 'sad', 'example', ')', 'and', 'another', 'reminder', 'that', ',', 'against', 'all', ',', 'life', 'is', 'wonderful', 'because', 'there', 'are', 'always', 'people', 'like', 'Isabel', 'Allende', 'and', 'immortalize', 'just', 'Bille', 'August', '.'], 'label': 'pos'}
Since we only have a train/test split, we need to create a validation set, which we can do with .split().
By default the data is split 70/30. By passing split_ratio we can change this ratio; for example, split_ratio=0.8 means 80% of the data goes to the training set and 20% to the validation set (see the sketch after the code below).
We also pass random_state so that we get the same split every time.
import random
train_data, valid_data = train_data.split(random_state=random.seed(SEED))
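For reference, an 80/20 split would look roughly like this (a hypothetical alternative to the call above, using the split_ratio argument; the variable names are just for illustration):

train_data_80, valid_data_20 = train_data.split(split_ratio=0.8,
                                                random_state=random.seed(SEED))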
Check how many examples each part has now.

print(f'Number of training examples: {len(train_data)}')
print(f'Number of validation examples: {len(valid_data)}')
print(f'Number of testing examples: {len(test_data)}')
Number of training examples: 17500
Number of validation examples: 7500
Number of testing examples: 25000
The next step is to build the vocabulary, which maps each word to an index.
We build the vocabulary from the 25,000 most frequent words, which the max_size argument takes care of.
All other words are represented by <unk>.

TEXT.build_vocab(train_data,
                 max_size=25000,
                 vectors="glove.6B.100d",
                 unk_init=torch.Tensor.normal_)
LABEL.build_vocab(train_data)

print(f"Unique tokens in TEXT vocabulary: {len(TEXT.vocab)}")
print(f"Unique tokens in LABEL vocabulary: {len(LABEL.vocab)}")
Unique tokens in TEXT vocabulary: 25002
Unique tokens in LABEL vocabulary: 2
When we feed sentences into the model, we pass them in batches, i.e. several sentences at a time, and all sentences in a batch must have the same length. To make sure they do, TorchText pads the shorter sentences until they are as long as the longest sentence in the batch.
Let's look at the most common words in the training set.

print(TEXT.vocab.freqs.most_common(20))
[('the', 201455), (',', 192552), ('.', 164402), ('a', 108963), ('and', 108649), ('of', 100010), ('to', 92873), ('is', 76046), ('in', 60904), ('I', 54486), ('it', 53405), ('that', 49155), ('"', 43890), ("'s", 43151), ('this', 42454), ('-', 36769), ('/><br', 35511), ('was', 34990), ('as', 30324), ('with', 29691)]
We can inspect the vocabulary directly with stoi (string to int) or itos (int to string).

print(TEXT.vocab.itos[:10])
['<unk>', '<pad>', 'the', ',', '.', 'a', 'and', 'of', 'to', 'is']
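Going the other way, stoi maps a word to its index; since it is a defaultdict, any out-of-vocabulary word falls back to the <unk> index (a quick check; the index printed for 'film' depends on the vocabulary that was built):

print(TEXT.vocab.stoi['film'])           # index of an in-vocabulary word
print(TEXT.vocab.stoi['qwertyuiop'])     # an out-of-vocabulary word maps to the <unk> index
print(TEXT.vocab.stoi[TEXT.unk_token])   # the <unk> index itself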
Check the labels.

print(LABEL.vocab.stoi)
defaultdict(<function _default_unk_index at 0x7f6944d3f730>, {'neg': 0, 'pos': 1})
The last step of data preparation is to create the iterators. Each iteration returns a batch of examples.
We use BucketIterator, which puts sentences of similar length into the same batch so that each batch contains as little padding as possible.
Strictly speaking, the <pad> token is still fed into the model together with the real words. A better approach is to keep the outputs produced by <pad> positions from affecting the result, so every model below also receives a mask marking the <pad> positions and excludes them from its computation (a masked average, packed sequences, or masked max-pooling). As a result, the padding has essentially no effect on the output.
If we have a GPU, we can also ask the iterator to return tensors that already live on the GPU.

BATCH_SIZE = 64

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

train_iterator, valid_iterator, test_iterator = data.BucketIterator.splits(
    (train_data, valid_data, test_data),
    batch_size=BATCH_SIZE,
    device=device,
    repeat=False)

batch = next(iter(train_iterator))
batch.text
tensor([[ 65, 6706, 23, ..., 3101, 54, 87],
[ 52, 11017, 83, ..., 24113, 15, 1078],
[ 8, 3, 671, ..., 52, 73, 3],
...,
[ 1, 1, 1, ..., 1, 1, 1],
[ 1, 1, 1, ..., 1, 1, 1],
[ 1, 1, 1, ..., 1, 1, 1]], device='cuda:0')
PAD_IDX = TEXT.vocab.stoi[TEXT.pad_token]
mask = batch.text == PAD_IDX
mask
tensor([[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
...,
[1, 1, 1, ..., 1, 1, 1],
[1, 1, 1, ..., 1, 1, 1],
[1, 1, 1, ..., 1, 1, 1]], device='cuda:0', dtype=torch.uint8)
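Since batch.text is laid out as (seq_len, batch), summing the non-<pad> entries along dimension 0 recovers each sentence's true length (a small check on the batch above):

lengths = (batch.text != PAD_IDX).long().sum(dim=0)
print(lengths.shape)   # torch.Size([64]), one length per sentence in the batch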
Word Averaging model
We start with a simple Word Averaging model. The idea is straightforward: each word is projected to a word embedding vector by an Embedding layer, the word vectors of a sentence are averaged to obtain the sentence vector, and this sentence vector is fed into a Linear layer for classification.
One way to do the average pooling is avg_pool2d: the goal is to average the sentence-length dimension down to 1 while keeping the embedding dimension, so with a kernel size of (embedded.shape[1], 1) the sentence-length dimension gets squashed.
The implementation below instead computes a masked mean: it sums only the non-<pad> positions and divides by each sentence's true length, so the padding does not dilute the average.
import torch.nn as nn
import torch.nn.functional as F

class WordAVGModel(nn.Module):
    def __init__(self, vocab_size, embedding_dim, output_dim, pad_idx):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embedding_dim, padding_idx=pad_idx)
        self.fc = nn.Linear(embedding_dim, output_dim)

    def forward(self, text, mask):
        # text: (batch, seq_len); mask: (batch, seq_len) with 1. for real tokens, 0. for <pad>
        embedded = self.embedding(text)  # (batch, seq_len, embedding_dim)
        # masked mean over the sequence dimension: sum the real tokens, divide by the true length
        sent_embed = torch.sum(embedded * mask.unsqueeze(2), 1) / mask.sum(1).unsqueeze(1)
        return self.fc(sent_embed)
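For comparison, the avg_pool2d route mentioned above would look roughly like this (a standalone shape sketch with toy sizes; note it averages over every position, including <pad>):

import torch
import torch.nn.functional as F

emb = torch.randn(64, 50, 100)                              # (batch, seq_len, emb_dim)
sent_vec = F.avg_pool2d(emb, (emb.shape[1], 1)).squeeze(1)  # kernel (seq_len, 1) squashes the length dimension
print(sent_vec.shape)                                       # torch.Size([64, 100])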
INPUT_DIM = len(TEXT.vocab)
EMBEDDING_DIM = 100
OUTPUT_DIM = 1

model = WordAVGModel(INPUT_DIM, EMBEDDING_DIM, OUTPUT_DIM, PAD_IDX)

def count_parameters(model):
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

print(f'The model has {count_parameters(model):,} trainable parameters')
The model has 2,500,301 trainable parameters
pretrained_embeddings = TEXT.vocab.vectors
model.embedding.weight.data.copy_(pretrained_embeddings)
tensor([[-0.1117, -0.4966, 0.1631, ..., 1.2647, -0.2753, -0.1325],
[-0.8555, -0.7208, 1.3755, ..., 0.0825, -1.1314, 0.3997],
[-0.0382, -0.2449, 0.7281, ..., -0.1459, 0.8278, 0.2706],
...,
[-0.7244, -0.0186, 0.0996, ..., 0.0045, -1.0037, 0.6646],
[-1.1243, 1.2040, -0.6489, ..., -0.7526, 0.5711, 1.0081],
[ 0.0860, 0.1367, 0.0321, ..., -0.5542, -0.4557, -0.0382]])
UNK_IDX = TEXT.vocab.stoi[TEXT.unk_token]

model.embedding.weight.data[UNK_IDX] = torch.zeros(EMBEDDING_DIM)
model.embedding.weight.data[PAD_IDX] = torch.zeros(EMBEDDING_DIM)

Training the model

import torch.optim as optim

optimizer = optim.Adam(model.parameters())
criterion = nn.BCEWithLogitsLoss()

model = model.to(device)
criterion = criterion.to(device)

Compute the prediction accuracy.
def binary_accuracy(preds, y):
    """
    Returns accuracy per batch, i.e. if you get 8/10 right, this returns 0.8, NOT 8
    """
    rounded_preds = torch.round(torch.sigmoid(preds))
    correct = (rounded_preds == y).float()
    acc = correct.sum() / len(correct)
    return acc
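A quick sanity check of this helper on made-up logits (the third prediction rounds to 1 against a label of 0, so the accuracy is 2/3):

preds = torch.tensor([2.0, -1.0, 0.5])   # raw logits, before the sigmoid
y = torch.tensor([1., 0., 0.])
print(binary_accuracy(preds, y))         # tensor(0.6667)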
def train(model, iterator, optimizer, criterion):
    epoch_loss = 0
    epoch_acc = 0
    model.train()
    i = 0
    for batch in iterator:
        optimizer.zero_grad()
        text = batch.text.permute(1, 0)          # (batch, seq_len)
        mask = 1. - (text == PAD_IDX).float()    # 1. for real tokens, 0. for <pad>
        predictions = model(text, mask).squeeze(1)
        loss = criterion(predictions, batch.label.float())
        acc = binary_accuracy(predictions, batch.label.float())
        loss.backward()
        optimizer.step()
        if i % 100 == 0:
            print("batch {}, loss {}".format(i, loss.item()))
        i += 1
        epoch_loss += loss.item()
        epoch_acc += acc.item()
    return epoch_loss / len(iterator), epoch_acc / len(iterator)

def evaluate(model, iterator, criterion):
    epoch_loss = 0
    epoch_acc = 0
    model.eval()
    i = 0
    with torch.no_grad():
        for batch in iterator:
            text = batch.text.permute(1, 0)
            mask = 1. - (text == PAD_IDX).float()
            predictions = model(text, mask).squeeze(1)
            loss = criterion(predictions, batch.label.float())
            acc = binary_accuracy(predictions, batch.label.float())
            if i % 100 == 0:
                print("batch {}, loss {}".format(i, loss.item()))
            i += 1
            epoch_loss += loss.item()
            epoch_acc += acc.item()
    return epoch_loss / len(iterator), epoch_acc / len(iterator)

import time

def epoch_time(start_time, end_time):
    elapsed_time = end_time - start_time
    elapsed_mins = int(elapsed_time / 60)
    elapsed_secs = int(elapsed_time - (elapsed_mins * 60))
    return elapsed_mins, elapsed_secs

N_EPOCHS = 10
best_valid_loss = float('inf')

for epoch in range(N_EPOCHS):
    start_time = time.time()
    train_loss, train_acc = train(model, train_iterator, optimizer, criterion)
    valid_loss, valid_acc = evaluate(model, valid_iterator, criterion)
    end_time = time.time()
    epoch_mins, epoch_secs = epoch_time(start_time, end_time)
    if valid_loss < best_valid_loss:
        best_valid_loss = valid_loss
        torch.save(model.state_dict(), 'wordavg-model.pt')
    print(f'Epoch: {epoch+1:02} | Epoch Time: {epoch_mins}m {epoch_secs}s')
    print(f'\tTrain Loss: {train_loss:.3f} | Train Acc: {train_acc*100:.2f}%')
    print(f'\t Val. Loss: {valid_loss:.3f} | Val. Acc: {valid_acc*100:.2f}%')
batch 0, loss 0.6835149526596069
batch 100, loss 0.6759217977523804
batch 200, loss 0.6536192297935486
batch 0, loss 0.5802608132362366
batch 100, loss 0.6405552625656128
Epoch: 01 | Epoch Time: 0m 2s
Train Loss: 0.661 | Train Acc: 66.62%
Val. Loss: 0.615 | Val. Acc: 74.25%
batch 0, loss 0.6175215244293213
batch 100, loss 0.5193076133728027
batch 200, loss 0.523094654083252
batch 0, loss 0.41260701417922974
batch 100, loss 0.546144425868988
Epoch: 02 | Epoch Time: 0m 2s
Train Loss: 0.542 | Train Acc: 78.82%
Val. Loss: 0.482 | Val. Acc: 81.45%
batch 0, loss 0.48719578981399536
batch 100, loss 0.3965785503387451
batch 200, loss 0.4322021007537842
batch 0, loss 0.35118478536605835
batch 100, loss 0.46531984210014343
Epoch: 03 | Epoch Time: 0m 2s
Train Loss: 0.414 | Train Acc: 85.14%
Val. Loss: 0.391 | Val. Acc: 85.33%
batch 0, loss 0.31555071473121643
batch 100, loss 0.3576723039150238
batch 200, loss 0.43358099460601807
batch 0, loss 0.3284790515899658
batch 100, loss 0.4068619906902313
Epoch: 04 | Epoch Time: 0m 2s
Train Loss: 0.333 | Train Acc: 88.22%
Val. Loss: 0.341 | Val. Acc: 86.73%
batch 0, loss 0.21446196734905243
batch 100, loss 0.29952651262283325
batch 200, loss 0.33016496896743774
batch 0, loss 0.33019396662712097
batch 100, loss 0.372672975063324
Epoch: 05 | Epoch Time: 0m 2s
Train Loss: 0.284 | Train Acc: 90.08%
Val. Loss: 0.311 | Val. Acc: 87.85%
batch 0, loss 0.21933476626873016
batch 100, loss 0.20656771957874298
batch 200, loss 0.2411007285118103
batch 0, loss 0.3338389992713928
batch 100, loss 0.35051852464675903
Epoch: 06 | Epoch Time: 0m 2s
Train Loss: 0.248 | Train Acc: 91.57%
Val. Loss: 0.292 | Val. Acc: 88.37%
batch 0, loss 0.2381495237350464
batch 100, loss 0.3066502809524536
batch 200, loss 0.17593657970428467
batch 0, loss 0.33260178565979004
batch 100, loss 0.3287006616592407
Epoch: 07 | Epoch Time: 0m 2s
Train Loss: 0.220 | Train Acc: 92.62%
Val. Loss: 0.281 | Val. Acc: 88.89%
batch 0, loss 0.18733319640159607
batch 100, loss 0.2353360801935196
batch 200, loss 0.19918608665466309
batch 0, loss 0.34648358821868896
batch 100, loss 0.3191569447517395
Epoch: 08 | Epoch Time: 0m 2s
Train Loss: 0.197 | Train Acc: 93.63%
Val. Loss: 0.269 | Val. Acc: 89.23%
batch 0, loss 0.10634639114141464
batch 100, loss 0.11403544247150421
batch 200, loss 0.29342859983444214
batch 0, loss 0.35649430751800537
batch 100, loss 0.3183209300041199
Epoch: 09 | Epoch Time: 0m 2s
Train Loss: 0.177 | Train Acc: 94.27%
Val. Loss: 0.264 | Val. Acc: 89.26%
batch 0, loss 0.16292411088943481
batch 100, loss 0.08687698841094971
batch 200, loss 0.21162091195583344
batch 0, loss 0.3467680811882019
batch 100, loss 0.2997514605522156
Epoch: 10 | Epoch Time: 0m 2s
Train Loss: 0.160 | Train Acc: 94.98%
Val. Loss: 0.258 | Val. Acc: 89.72%
import spacy
nlp = spacy.load('en')

def predict_sentiment(sentence):
    tokenized = [tok.text for tok in nlp.tokenizer(sentence)]
    indexed = [TEXT.vocab.stoi[t] for t in tokenized]
    tensor = torch.LongTensor(indexed).to(device)
    text = tensor.unsqueeze(0)                 # add a batch dimension: (1, seq_len)
    mask = 1. - (text == PAD_IDX).float()
    prediction = torch.sigmoid(model(text, mask))
    return prediction.item()

predict_sentiment("This film is terrible")
2.4536811471520537e-10
predict_sentiment("This film is great")
1.0
model.load_state_dict(torch.load('wordavg-model.pt'))
test_loss, test_acc = evaluate(model, test_iterator, criterion)
print(f'Test Loss: {test_loss:.3f} | Test Acc: {test_acc*100:.2f}%')
batch 0, loss 0.32167336344718933
batch 100, loss 0.34431976079940796
batch 200, loss 0.18615691363811493
batch 300, loss 0.37860944867134094
Test Loss: 0.290 | Test Acc: 88.03%
RNN model
Next we switch to a recurrent neural network (RNN). An RNN is often used to encode a sequence: $$h_t = \text{RNN}(x_t, h_{t-1})$$
We use the last hidden state $h_T$ to represent the whole sentence.
We then pass $h_T$ through a linear transformation $f$ to predict the sentiment of the sentence.
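To see where the final forward and backward states live in a bidirectional, multi-layer nn.LSTM, here is a quick shape check with toy sizes (the RNN class below relies on this layout when avg_hidden=False):

import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=100, hidden_size=256, num_layers=2, bidirectional=True)
x = torch.randn(50, 64, 100)           # (seq_len, batch, emb_dim)
output, (hidden, cell) = lstm(x)
print(output.shape)                    # torch.Size([50, 64, 512]): top layer, both directions, per step
print(hidden.shape)                    # torch.Size([4, 64, 256]): num_layers * num_directions slices
# hidden[-2] and hidden[-1] are the top layer's final forward and backward states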
class RNN(nn.Module):
    def __init__(self, vocab_size, embedding_dim, hidden_dim, output_dim,
                 n_layers, bidirectional, dropout, pad_idx, avg_hidden=True):
        super().__init__()
        self.bidirectional = bidirectional
        self.embedding = nn.Embedding(vocab_size, embedding_dim, padding_idx=pad_idx)
        self.rnn = nn.LSTM(embedding_dim, hidden_dim, num_layers=n_layers,
                           bidirectional=bidirectional, dropout=dropout)
        self.fc = nn.Linear(hidden_dim*2 if self.bidirectional else hidden_dim, output_dim)
        self.dropout = nn.Dropout(dropout)
        self.avg_hidden = avg_hidden

    def forward(self, text, mask):
        # text: (batch, seq_len); mask: (batch, seq_len)
        embedded = self.dropout(self.embedding(text))
        seq_length = mask.sum(1)
        # pack the sequences so the LSTM does not run over <pad> positions
        embedded = torch.nn.utils.rnn.pack_padded_sequence(
            input=embedded,
            lengths=seq_length,
            batch_first=True,
            enforce_sorted=False)
        output, (hidden, cell) = self.rnn(embedded)
        output, seq_length = torch.nn.utils.rnn.pad_packed_sequence(
            sequence=output,
            batch_first=True,
            padding_value=0,
            total_length=mask.shape[1])
        if self.avg_hidden:
            # masked average of the hidden states at every real position
            hidden = torch.sum(output * mask.unsqueeze(2), 1) / torch.sum(mask, 1, keepdim=True)
        else:
            # use the final hidden state(s) of the top layer
            if self.bidirectional:
                hidden = torch.cat((hidden[-2, :, :], hidden[-1, :, :]), dim=1)
            else:
                hidden = self.dropout(hidden[-1, :, :])
        hidden = self.dropout(hidden)
        return self.fc(hidden)

INPUT_DIM = len(TEXT.vocab)
EMBEDDING_DIM = 100
HIDDEN_DIM = 256
OUTPUT_DIM = 1
N_LAYERS = 2
BIDIRECTIONAL = True
DROPOUT = 0.5
PAD_IDX = TEXT.vocab.stoi[TEXT.pad_token]

model = RNN(INPUT_DIM, EMBEDDING_DIM, HIDDEN_DIM, OUTPUT_DIM,
            N_LAYERS, BIDIRECTIONAL, DROPOUT, PAD_IDX, avg_hidden=False)

print(f'The model has {count_parameters(model):,} trainable parameters')
The model has 4,810,857 trainable parameters
model.embedding.weight.data.copy_(pretrained_embeddings)

UNK_IDX = TEXT.vocab.stoi[TEXT.unk_token]
model.embedding.weight.data[UNK_IDX] = torch.zeros(EMBEDDING_DIM)
model.embedding.weight.data[PAD_IDX] = torch.zeros(EMBEDDING_DIM)

print(model.embedding.weight.data)
tensor([[ 0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],
[-0.0382, -0.2449, 0.7281, ..., -0.1459, 0.8278, 0.2706],
...,
[-0.7244, -0.0186, 0.0996, ..., 0.0045, -1.0037, 0.6646],
[-1.1243, 1.2040, -0.6489, ..., -0.7526, 0.5711, 1.0081],
[ 0.0860, 0.1367, 0.0321, ..., -0.5542, -0.4557, -0.0382]])
Training the RNN model

optimizer = optim.Adam(model.parameters())
model = model.to(device)

N_EPOCHS = 5
best_valid_loss = float('inf')

for epoch in range(N_EPOCHS):
    start_time = time.time()
    train_loss, train_acc = train(model, train_iterator, optimizer, criterion)
    valid_loss, valid_acc = evaluate(model, valid_iterator, criterion)
    end_time = time.time()
    epoch_mins, epoch_secs = epoch_time(start_time, end_time)
    if valid_loss < best_valid_loss:
        best_valid_loss = valid_loss
        torch.save(model.state_dict(), 'lstm-model.pt')
    print(f'Epoch: {epoch+1:02} | Epoch Time: {epoch_mins}m {epoch_secs}s')
    print(f'\tTrain Loss: {train_loss:.3f} | Train Acc: {train_acc*100:.2f}%')
    print(f'\t Val. Loss: {valid_loss:.3f} | Val. Acc: {valid_acc*100:.2f}%')
batch 0, loss 0.6940298080444336
batch 100, loss 0.6605077981948853
batch 200, loss 0.5677657723426819
batch 0, loss 0.6464325189590454
batch 100, loss 0.7902224659919739
Epoch: 01 | Epoch Time: 1m 1s
Train Loss: 0.651 | Train Acc: 61.65%
Val. Loss: 0.717 | Val. Acc: 52.98%
batch 0, loss 0.7926035523414612
batch 100, loss 0.7492727637290955
batch 200, loss 0.7025203704833984
batch 0, loss 0.6599957942962646
batch 100, loss 0.6523773670196533
Epoch: 02 | Epoch Time: 1m 1s
Train Loss: 0.673 | Train Acc: 57.12%
Val. Loss: 0.659 | Val. Acc: 61.20%
batch 0, loss 0.64130699634552
batch 100, loss 0.6027564406394958
batch 200, loss 0.6683254837989807
batch 0, loss 0.5396684408187866
batch 100, loss 0.5652653574943542
Epoch: 03 | Epoch Time: 1m 2s
Train Loss: 0.610 | Train Acc: 66.25%
Val. Loss: 0.597 | Val. Acc: 68.90%
batch 0, loss 0.580141544342041
batch 100, loss 0.2638660669326782
batch 200, loss 0.4949319064617157
batch 0, loss 0.3330756723880768
batch 100, loss 0.39001500606536865
Epoch: 04 | Epoch Time: 1m 1s
Train Loss: 0.479 | Train Acc: 77.27%
Val. Loss: 0.378 | Val. Acc: 84.53%
batch 0, loss 0.4124695062637329
batch 100, loss 0.5047512054443359
batch 200, loss 0.4246818423271179
batch 0, loss 0.3377535343170166
batch 100, loss 0.29955512285232544
Epoch: 05 | Epoch Time: 1m 1s
Train Loss: 0.343 | Train Acc: 85.52%
Val. Loss: 0.309 | Val. Acc: 87.28%
Now let's try the version that averages the hidden states.

INPUT_DIM = len(TEXT.vocab)
EMBEDDING_DIM = 100
HIDDEN_DIM = 256
OUTPUT_DIM = 1
N_LAYERS = 2
BIDIRECTIONAL = True
DROPOUT = 0.5
PAD_IDX = TEXT.vocab.stoi[TEXT.pad_token]

rnn_model_avg = RNN(INPUT_DIM, EMBEDDING_DIM, HIDDEN_DIM, OUTPUT_DIM,
                    N_LAYERS, BIDIRECTIONAL, DROPOUT, PAD_IDX)

print(f'The model has {count_parameters(rnn_model_avg):,} trainable parameters')
The model has 4,810,857 trainable parameters
rnn_model_avg.embedding.weight.data.copy_(pretrained_embeddings)

UNK_IDX = TEXT.vocab.stoi[TEXT.unk_token]
rnn_model_avg.embedding.weight.data[UNK_IDX] = torch.zeros(EMBEDDING_DIM)
rnn_model_avg.embedding.weight.data[PAD_IDX] = torch.zeros(EMBEDDING_DIM)

print(rnn_model_avg.embedding.weight.data)
tensor([[ 0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],
[-0.0382, -0.2449, 0.7281, ..., -0.1459, 0.8278, 0.2706],
...,
[-0.7244, -0.0186, 0.0996, ..., 0.0045, -1.0037, 0.6646],
[-1.1243, 1.2040, -0.6489, ..., -0.7526, 0.5711, 1.0081],
[ 0.0860, 0.1367, 0.0321, ..., -0.5542, -0.4557, -0.0382]])
Training the averaged RNN model

optimizer = optim.Adam(rnn_model_avg.parameters())
rnn_model_avg = rnn_model_avg.to(device)

N_EPOCHS = 5
best_valid_loss = float('inf')

for epoch in range(N_EPOCHS):
    start_time = time.time()
    train_loss, train_acc = train(rnn_model_avg, train_iterator, optimizer, criterion)
    valid_loss, valid_acc = evaluate(rnn_model_avg, valid_iterator, criterion)
    end_time = time.time()
    epoch_mins, epoch_secs = epoch_time(start_time, end_time)
    if valid_loss < best_valid_loss:
        best_valid_loss = valid_loss
        torch.save(rnn_model_avg.state_dict(), 'lstm-avg-model.pt')
    print(f'Epoch: {epoch+1:02} | Epoch Time: {epoch_mins}m {epoch_secs}s')
    print(f'\tTrain Loss: {train_loss:.3f} | Train Acc: {train_acc*100:.2f}%')
    print(f'\t Val. Loss: {valid_loss:.3f} | Val. Acc: {valid_acc*100:.2f}%')
batch 0, loss 0.6885155439376831
batch 100, loss 0.5888913869857788
batch 200, loss 0.4656108617782593
batch 0, loss 0.4603933095932007
batch 100, loss 0.38754644989967346
Epoch: 01 | Epoch Time: 1m 20s
Train Loss: 0.528 | Train Acc: 72.70%
Val. Loss: 0.362 | Val. Acc: 84.47%
batch 0, loss 0.29848513007164
batch 100, loss 0.27336984872817993
batch 200, loss 0.35852643847465515
batch 0, loss 0.4745270907878876
batch 100, loss 0.32764753699302673
Epoch: 02 | Epoch Time: 1m 20s
Train Loss: 0.342 | Train Acc: 85.55%
Val. Loss: 0.294 | Val. Acc: 88.03%
batch 0, loss 0.31138738989830017
batch 100, loss 0.3301498591899872
batch 200, loss 0.5036394596099854
batch 0, loss 0.36463940143585205
batch 100, loss 0.3079427480697632
Epoch: 03 | Epoch Time: 1m 20s
Train Loss: 0.276 | Train Acc: 88.91%
Val. Loss: 0.257 | Val. Acc: 89.85%
batch 0, loss 0.19154249131679535
batch 100, loss 0.24453845620155334
batch 200, loss 0.2616804540157318
batch 0, loss 0.4100673198699951
batch 100, loss 0.29790183901786804
Epoch: 04 | Epoch Time: 1m 20s
Train Loss: 0.230 | Train Acc: 91.25%
Val. Loss: 0.250 | Val. Acc: 90.16%
batch 0, loss 0.21265330910682678
batch 100, loss 0.34193551540374756
batch 200, loss 0.19812607765197754
batch 0, loss 0.3696991205215454
batch 100, loss 0.30417782068252563
Epoch: 05 | Epoch Time: 1m 20s
Train Loss: 0.202 | Train Acc: 92.49%
Val. Loss: 0.251 | Val. Acc: 90.26%
Both LSTM variants learn well here: averaging the hidden states converges faster and reaches slightly better validation accuracy (around 90%) than using only the final hidden state.
Finally, we report the metric we actually care about: the test loss and accuracy, computed with the parameters that achieved the best validation loss.

rnn_model_avg.load_state_dict(torch.load('lstm-avg-model.pt'))
test_loss, test_acc = evaluate(rnn_model_avg, test_iterator, criterion)
print(f'Test Loss: {test_loss:.3f} | Test Acc: {test_acc*100:.2f}%')
batch 0, loss 0.24243609607219696
batch 100, loss 0.28235137462615967
batch 200, loss 0.20145781338214874
batch 300, loss 0.3198160231113434
Test Loss: 0.335 | Test Acc: 85.88%
CNN model

class CNN(nn.Module):
    def __init__(self, vocab_size, embedding_dim, n_filters, filter_sizes,
                 output_dim, dropout, pad_idx):
        super().__init__()
        self.filter_sizes = filter_sizes
        self.embedding = nn.Embedding(vocab_size, embedding_dim, padding_idx=pad_idx)
        self.convs = nn.ModuleList([
            nn.Conv2d(in_channels=1,
                      out_channels=n_filters,
                      kernel_size=(fs, embedding_dim))
            for fs in filter_sizes
        ])
        self.fc = nn.Linear(len(filter_sizes) * n_filters, output_dim)
        self.dropout = nn.Dropout(dropout)

    def forward(self, text, mask):
        # text: (batch, seq_len); mask: (batch, seq_len)
        embedded = self.embedding(text)      # (batch, seq_len, emb_dim)
        embedded = embedded.unsqueeze(1)     # add a channel dimension for Conv2d
        conved = [F.relu(conv(embedded)).squeeze(3) for conv in self.convs]
        # mask out windows that start at a <pad> position before max-pooling
        conved = [conv.masked_fill((1. - mask[:, :-filter_size+1]).unsqueeze(1).byte(), -999999)
                  for (conv, filter_size) in zip(conved, self.filter_sizes)]
        pooled = [F.max_pool1d(conv, conv.shape[2]).squeeze(2) for conv in conved]
        cat = self.dropout(torch.cat(pooled, dim=1))
        return self.fc(cat)
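To follow the shapes through one convolution branch, here is a standalone walk-through with toy sizes (batch 64, sentence length 50, filter size 3; only the sizes matter):

import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.randn(64, 1, 50, 100)               # (batch, channel, seq_len, emb_dim)
conv = nn.Conv2d(in_channels=1, out_channels=100, kernel_size=(3, 100))
c = F.relu(conv(x)).squeeze(3)                # (64, 100, 48): each filter sees 50 - 3 + 1 windows
p = F.max_pool1d(c, c.shape[2]).squeeze(2)    # (64, 100): max over time for each filter
print(c.shape, p.shape)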
INPUT_DIM = len(TEXT.vocab)
EMBEDDING_DIM = 100
N_FILTERS = 100
FILTER_SIZES = [3, 4, 5]
OUTPUT_DIM = 1
DROPOUT = 0.5
PAD_IDX = TEXT.vocab.stoi[TEXT.pad_token]

model = CNN(INPUT_DIM, EMBEDDING_DIM, N_FILTERS, FILTER_SIZES, OUTPUT_DIM, DROPOUT, PAD_IDX)
model.embedding.weight.data.copy_(pretrained_embeddings)

UNK_IDX = TEXT.vocab.stoi[TEXT.unk_token]
model.embedding.weight.data[UNK_IDX] = torch.zeros(EMBEDDING_DIM)
model.embedding.weight.data[PAD_IDX] = torch.zeros(EMBEDDING_DIM)

model = model.to(device)

optimizer = optim.Adam(model.parameters())
criterion = nn.BCEWithLogitsLoss()
criterion = criterion.to(device)

N_EPOCHS = 5
best_valid_loss = float('inf')

for epoch in range(N_EPOCHS):
    start_time = time.time()
    train_loss, train_acc = train(model, train_iterator, optimizer, criterion)
    valid_loss, valid_acc = evaluate(model, valid_iterator, criterion)
    end_time = time.time()
    epoch_mins, epoch_secs = epoch_time(start_time, end_time)
    if valid_loss < best_valid_loss:
        best_valid_loss = valid_loss
        torch.save(model.state_dict(), 'CNN-model.pt')
    print(f'Epoch: {epoch+1:02} | Epoch Time: {epoch_mins}m {epoch_secs}s')
    print(f'\tTrain Loss: {train_loss:.3f} | Train Acc: {train_acc*100:.2f}%')
    print(f'\t Val. Loss: {valid_loss:.3f} | Val. Acc: {valid_acc*100:.2f}%')
batch 0, loss 0.7456250190734863
batch 100, loss 0.7356712818145752
batch 200, loss 0.608451247215271
batch 0, loss 0.5171981453895569
batch 100, loss 0.5627424716949463
Epoch: 01 | Epoch Time: 0m 11s
Train Loss: 0.653 | Train Acc: 61.19%
Val. Loss: 0.511 | Val. Acc: 78.05%
batch 0, loss 0.5206002593040466
batch 100, loss 0.4522325098514557
batch 200, loss 0.39397668838500977
batch 0, loss 0.36625632643699646
batch 100, loss 0.34350645542144775
Epoch: 02 | Epoch Time: 0m 11s
Train Loss: 0.430 | Train Acc: 80.41%
Val. Loss: 0.356 | Val. Acc: 85.21%
batch 0, loss 0.3453408479690552
batch 100, loss 0.3106832504272461
batch 200, loss 0.29214251041412354
batch 0, loss 0.34314772486686707
batch 100, loss 0.27926790714263916
Epoch: 03 | Epoch Time: 0m 11s
Train Loss: 0.305 | Train Acc: 87.17%
Val. Loss: 0.318 | Val. Acc: 86.52%
batch 0, loss 0.2820616066455841
batch 100, loss 0.2185526192188263
batch 200, loss 0.2295588254928589
batch 0, loss 0.3212977647781372
batch 100, loss 0.2501620352268219
Epoch: 04 | Epoch Time: 0m 11s
Train Loss: 0.222 | Train Acc: 91.30%
Val. Loss: 0.311 | Val. Acc: 87.25%
batch 0, loss 0.06584674119949341
batch 100, loss 0.1338910311460495
batch 200, loss 0.22213703393936157
batch 0, loss 0.32934656739234924
batch 100, loss 0.2596980333328247
Epoch: 05 | Epoch Time: 0m 11s
Train Loss: 0.162 | Train Acc: 94.01%
Val. Loss: 0.318 | Val. Acc: 87.29%
model.load_state_dict(torch.load('CNN-model.pt'))
test_loss, test_acc = evaluate(model, test_iterator, criterion)
print(f'Test Loss: {test_loss:.3f} | Test Acc: {test_acc*100:.2f}%')
batch 0, loss 0.1641087532043457
batch 100, loss 0.38564836978912354
batch 200, loss 0.26448047161102295
batch 300, loss 0.4913085401058197
Test Loss: 0.350 | Test Acc: 85.04%